Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness


Abstract

We present a new algorithm to train a robust malware detector. Malware is a prolific problem and malware detectors are a front-line defense. Modern detectors rely on machine learning algorithms. Now, the adversarial objective is to devise alterations to the malware code to decrease the chance of being detected whilst preserving the functionality and realism of the malware. Adversarial learning is effective in improving robustness, but generating functional and realistic adversarial malware samples is non-trivial. Because: i) in contrast to tasks capable of using gradient-based feedback, adversarial learning in a domain without a differentiable mapping function from the problem space (malware code inputs) to the feature space is hard; and ii) it is difficult to ensure the adversarial malware is realistic and functional. This presents a challenge for developing scalable adversarial machine learning algorithms for large datasets at a production or commercial scale to realize robust malware detectors. We propose an alternative: perform adversarial learning in the feature space. We prove that the projection of perturbed, yet valid malware, from the problem space into the feature space will always be a subset of the adversarials generated in the feature space. Hence, by training a network against feature-space adversarial examples, we inherently achieve robustness against problem-space adversarial examples. We formulate a Bayesian adversarial learning objective that captures the distribution of models for improved robustness. To explain the robustness of the algorithm, we prove that our learning method bounds the difference between the adversarial risk and the empirical risk, and improves robustness. We show that adversarially trained Bayesian neural networks (BNNs) achieve state-of-the-art results, especially in the low False Positive Rate (FPR) regime. Notably, adversarially trained BNNs are robust against stronger attacks with larger attack budgets by a margin of up to 15% on a recent production-scale dataset of more than 20 million samples. Importantly, our efforts create a benchmark for future defenses in the malware domain.
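The core idea of feature-space adversarial learning can be sketched with a toy example: perturb extracted feature vectors (rather than the malware binaries themselves) in the loss-increasing direction, then train the detector on those perturbed vectors. The sketch below is a minimal, hypothetical illustration using a plain logistic-regression "detector" and a one-step sign-gradient attack; it is not the paper's Bayesian formulation, and all names (`fgsm_features`, `train_adversarial`), data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for extracted malware feature vectors with binary labels.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

def fgsm_features(w, b, x, t, eps):
    """One-step feature-space attack: move the feature vector x in the
    direction that increases the detector's loss (sign of the gradient)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - t) * w          # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def train_adversarial(X, y, eps=0.1, lr=0.1, epochs=50):
    """Adversarial training loop: at each epoch, craft feature-space
    adversarial examples against the current model, then update on them."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        X_adv = np.stack([fgsm_features(w, b, x, t, eps)
                          for x, t in zip(X, y)])
        p = sigmoid(X_adv @ w + b)
        # Gradient descent on the binary cross-entropy over the adversarial batch.
        w -= lr * X_adv.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_adversarial(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
print(f"clean accuracy: {acc:.2f}")
```

Because the attack operates purely on feature vectors, no differentiable mapping from raw malware binaries to features is needed, which is the scalability argument the abstract makes; the Bayesian extension in the paper additionally maintains a distribution over model parameters rather than a single point estimate.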


Similar Articles

Adversarial Feature Learning

The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing generators learn to “linearize semantics” in the latent space of such models. Intuitively, such latent spaces may serve as useful feature representation...


Adversarial Deep Learning for Robust Detection of Binary Encoded Malware

Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples, which are modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimiza...


Defending Non-Bayesian Learning against Adversarial Attacks

Abstract This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state. We focus on the impact of the adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local le...


Adversarial Examples for Malware Detection

Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations. In this work, we expand on existing adversarial example crafting algorithms to construct a highly-effective attack that uses adversarial examples against malware detecti...


Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

Machine-learning methods have already been exploited as useful tools for detecting malicious executable files. They leverage data retrieved from malware samples, such as header fields, instruction sequences, or even raw bytes, to learn models that discriminate between benign and malicious software. However, it has also been shown that machine learning and deep neural networks can be fooled by e...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i12.26727